Recent advances in sensing and display technologies have been transforming our living environments drastically. In this paper, a new technique is introduced to accurately reconstruct indoor environments in three dimensions using a mobile platform. The system incorporates a scanner with four ultrasonic sensors, an HD web camera, and an inertial measurement unit (IMU). The whole platform is mountable on mobile facilities, such as a wheelchair. The proposed mapping approach takes advantage of the precision of the 3D point clouds produced by the ultrasonic sensor system, despite their sparsity, to help build a more accurate 3D scene. Using a robust iterative algorithm, it combines the 3D point clouds generated by structure from motion with those generated by the ultrasonic sensors and the IMU, deriving a much more precise point cloud from the depth measurements of the ultrasonic sensors. Because they capture features of objects in the targeted scene, the ultrasonic-generated point clouds are used for feature extraction on consecutive point clouds to ensure accurate alignment. The ranges measured by the ultrasonic sensors contribute to the depth correction of the generated 3D scenes. Experiments revealed that the system generates 3D maps of the environment that are not only dense but also precise. The results show that the designed 3D modeling platform can support assistive-living environments through self-navigation, obstacle alerts, and other driving-assistance tasks.